
Endometrial Cancer


CT Radiomics-Based Explainable Machine Learning Model for Accurate Differentiation of Malignant and Benign Endometrial Tumors: A Two-Center Study

Zhang, Tingrui, Wu, Honglin, Jiang, Zekun, Wang, Yingying, Ye, Rui, Ni, Huiming, Liu, Chang, Cao, Jin, Sun, Xuan, Shao, Rong, Wei, Xiaorong, Sun, Yingchun

arXiv.org Artificial Intelligence

This study aimed to develop and validate a CT radiomics-based explainable machine learning model for the accurate differentiation of malignant and benign endometrial tumors. A total of 83 patients with suspected endometrial cancer (EC) from two centers, including 46 with malignant and 37 with benign conditions, were included, with data split into a training set (n=59) and a testing set (n=24). Regions of interest (ROIs) were manually segmented from pre-surgical CT scans, from which 1132 radiomic features were extracted using Pyradiomics. Six explainable machine learning (ML) modeling algorithms were implemented to determine the optimal radiomics pipeline. The diagnostic performance of the radiomic model was evaluated using sensitivity, specificity, accuracy, precision, F1 score, AUROC, and AUPRC. To enhance clinical understanding and usability, we performed SHAP analysis and feature mapping visualization, and evaluated the calibration and decision curves. Among the six modeling strategies, the Random Forest model emerged as the optimal choice for diagnosing EC, with a training AUROC of 1.00 and a testing AUROC of 0.96. SHAP identified the most important radiomic features, revealing that all selected features were significantly associated with EC (P < 0.05). Radiomics feature maps also provide a feasible assessment tool for clinical applications. Decision Curve Analysis (DCA) indicated a higher net benefit for our model compared with the "All" and "None" strategies, suggesting its clinical utility in identifying high-risk cases and reducing unnecessary interventions. In conclusion, the CT radiomics-based explainable ML model achieved high diagnostic performance and could serve as an intelligent auxiliary tool for the diagnosis of endometrial cancer.
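The modeling stage described in the abstract (binary classification of radiomic features with a Random Forest, evaluated by AUROC) can be sketched as follows. This is a minimal illustration, not the authors' pipeline: the feature matrix is synthetic random data standing in for the 1132 Pyradiomics features, and the hyperparameters are placeholder assumptions.

```python
# Minimal sketch of the modeling stage, assuming radiomic features have
# already been extracted (e.g. with Pyradiomics). Feature values below
# are synthetic stand-ins, not real CT data.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n_patients, n_features = 83, 1132          # cohort sizes from the abstract
X = rng.normal(size=(n_patients, n_features))
y = rng.integers(0, 2, size=n_patients)    # 1 = malignant, 0 = benign

# Split roughly as in the study (59 training / 24 testing cases).
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=24, random_state=0, stratify=y)

# n_estimators is an illustrative choice, not the paper's setting.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"testing AUROC: {auc:.2f}")
```

With real radiomic features, the same `clf` object could then be passed to a SHAP tree explainer to rank feature importance, as the study does.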


Risk Assessment of Lymph Node Metastases in Endometrial Cancer Patients: A Causal Approach

Zanga, Alessio, Bernasconi, Alice, Lucas, Peter J. F., Pijnenborg, Hanny, Reijnen, Casper, Scutari, Marco, Stella, Fabio

arXiv.org Artificial Intelligence

Artificial Intelligence (AI) has found many applications in medicine [15] and, more specifically, in cancer research [32] in the form of predictive models for diagnosis [14], prognosis [6] and therapy planning [12]. As subfields of AI, Machine Learning (ML) and in particular Deep Learning (DL) have achieved significant results, especially in image processing [3]. Nonetheless, ML and DL models have limited explainability [13] because of their black-box design, which limits their adoption in the clinical field: clinicians and physicians are reluctant to include models that are not transparent in their decision process [24]. While recent research on Explainable AI (XAI) [11] has tackled this problem, DL models are still opaque and difficult to interpret. In contrast, in Probabilistic Graphical Models (PGMs) the interactions between different variables are encoded explicitly: the joint probability distribution P of the variables of interest factorizes according to a graph G, hence the "graphical" connotation. Bayesian Networks (BNs) [23], which we will describe in Section 3.1, are an instance of PGMs that can be used as causal models. In turn, this makes them ideal to use as decision support systems and to overcome the limitations of the predictions based on probabilistic associations produced by other ML models [1, 19].
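The factorization property mentioned above can be shown concretely: the joint distribution of a Bayesian Network is the product of each variable's conditional distribution given its parents in the graph. The toy network below (a tumor variable with two children) and all probabilities are illustrative assumptions, not the paper's actual lymph-node-metastasis model.

```python
# Toy Bayesian Network: Tumor -> Marker, Tumor -> Metastasis (a fork).
# The joint factorizes as P(T, M, Z) = P(T) * P(M | T) * P(Z | T).
# All variables and probability values are hypothetical.
from itertools import product

p_tumor = {0: 0.6, 1: 0.4}                     # P(T)
p_marker = {0: {0: 0.8, 1: 0.2},               # P(M | T=0)
            1: {0: 0.3, 1: 0.7}}               # P(M | T=1)
p_meta = {0: {0: 0.9, 1: 0.1},                 # P(Z | T=0)
          1: {0: 0.5, 1: 0.5}}                 # P(Z | T=1)

# Build the full joint distribution from the graph's factorization.
joint = {(t, m, z): p_tumor[t] * p_marker[t][m] * p_meta[t][z]
         for t, m, z in product((0, 1), repeat=3)}

total = sum(joint.values())
print(f"joint sums to {total:.4f}")  # a valid distribution sums to 1
```

Because the graph structure is explicit, every factor in the product corresponds to a readable local statement about the domain, which is what makes BNs more transparent than black-box ML models.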